22 research outputs found

    Signatures of criticality arise from random subsampling in simple population models

    <div><p>The rise of large-scale recordings of neuronal activity has fueled the hope to gain new insights into the collective activity of neural ensembles. How can one link the statistics of neural population activity to underlying principles and theories? One attempt to interpret such data builds upon analogies to the behaviour of collective systems in statistical physics. Divergence of the specific heat—a measure of population statistics derived from thermodynamics—has been used to suggest that neural populations are optimized to operate at a “critical point”. However, these findings have been challenged by theoretical studies which have shown that common inputs can lead to diverging specific heat. Here, we connect “signatures of criticality”, and in particular the divergence of specific heat, back to statistics of neural population activity commonly studied in neural coding: firing rates and pairwise correlations. We show that the specific heat diverges whenever the average correlation strength does not depend on population size. This is necessarily true when data with correlations is randomly subsampled during the analysis process, irrespective of the detailed structure or origin of correlations. We also show how the characteristic shape of specific heat capacity curves depends on firing rates and correlations, using both analytically tractable models and numerical simulations of a canonical feed-forward population model. To analyze these simulations, we develop efficient methods for characterizing large-scale neural population activity with maximum entropy models. We find that, consistent with experimental findings, increases in firing rates and correlation directly lead to more pronounced signatures. Thus, previous reports of thermodynamical criticality in neural populations based on the analysis of specific heat can be explained by average firing rates and correlations, and are not indicative of an optimized coding strategy. 
We conclude that a reliable interpretation of statistical tests for theories of neural coding is possible only in reference to relevant ground-truth models.</p></div>
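The subsampling argument can be illustrated numerically. In an exchangeable common-input model (a hypothetical stand-in with made-up parameters, not the paper's fitted models), the average pairwise correlation does not depend on which or how many neurons are sampled, and the specific heat at unit temperature, c(1) = Var[log P(x)]/n, grows with population size:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def heat_common_input(n, lam=0.3, theta=1.0, n_samples=200_000):
    """Estimate c(1) = Var[log P(x)] / n for an exchangeable common-input
    model: neuron i fires when sqrt(lam)*g + sqrt(1-lam)*z_i > theta,
    with a shared Gaussian input g inducing pairwise correlations."""
    g = rng.standard_normal(n_samples)
    z = rng.standard_normal((n_samples, n))
    spikes = (np.sqrt(lam) * g[:, None] + np.sqrt(1 - lam) * z) > theta
    counts = spikes.sum(axis=1)
    pk = np.bincount(counts, minlength=n + 1) / n_samples
    k = np.arange(n + 1)
    # Exchangeability: every pattern with k spikes has probability p(k)/C(n,k).
    log_binom = np.array([math.lgamma(n + 1) - math.lgamma(i + 1)
                          - math.lgamma(n - i + 1) for i in k])
    mask = pk > 0
    log_p = np.log(pk[mask]) - log_binom[mask]
    w = pk[mask]
    mean = (w * log_p).sum()
    return (w * (log_p - mean) ** 2).sum() / n

for n in (16, 32, 64):
    print(n, heat_common_input(n))
```

The printed specific heat increases with n even though nothing about the correlation structure changes, which is the sense in which fixed average correlations alone produce a "signature of criticality".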

    Signatures of criticality in a simulation of retinal ganglion cell activity.

<p><b>a)</b> Simulation schematic: Neurons have linear stimulus selectivity with centre-surround receptive fields and correlated Gaussian noise. <b>b)</b> Statistics of simulated population activity. Histograms of firing rates (left), correlation coefficients (centre) and frequency of population spike-counts (right). <b>c)</b> Estimation-error (normalised mean square error) in pairwise covariances as function of sample size, averaged across 10 populations of size <i>n</i> = 100. Rao-Blackwellization reduces the number of samples needed for a given level of accuracy by a factor ≈ 3. <b>d)</b> Quality of fit: Population models (here <i>n</i> = 100, example population) capture the mean firing rates (left), covariances (centre) and spike-counts (right). <b>e)</b> Divergence of specific heat: Average and individual traces for 10 randomly sampled populations for each of 6 different population sizes, exhibiting divergence of specific heat and peak in heat near unit temperature. Inset: Specific heat at unit temperature and at peak vs. population size. <b>f)</b> Specific heat for different temperatures and subsampled population sizes (here denoted by capital letter <i>N</i>) in recordings of salamander retinal ganglion cells responding to naturalistic stimuli, reproduced from [<a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1005718#pcbi.1005718.ref007" target="_blank">7</a>].</p>

    How can one relate theories of thermodynamic criticality to the statistics of neural data?

<p>In physical systems, the divergence of specific heat with system size can be interpreted as the system being at a critical point. We here study an analysis approach that has been proposed in order to search for similar signatures of criticality in the statistics of neural population activity. In this approach, different populations are subsampled from a large recording and summary statistics are extracted for each subpopulation (e.g. firing rates, correlations and population spike count statistics). Subsequently, maximum entropy models are fit to these data which assign a probability to each possible spike-pattern. Exploiting the mathematical relationship between the log-variance of probabilities (in statistics) and the specific heat (in thermodynamics) then allows one to compute and study the behaviour of the specific heat with population size. The goal of this study is to determine under which conditions (i.e., for which firing rates and correlations) such an analysis would report that the system is critical. To this end, we apply this approach to a simulation of neural population activity and analytically tractable models.</p>
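This relationship can be made concrete for a population small enough to enumerate all 2^n spike patterns. The sketch below uses a pairwise model with random (not fitted) parameters; writing E(x) = −log P(x) − log Z at unit temperature, the variance of E under P_T(x) ∝ exp(−E(x)/T) gives the specific heat, and at T = 1 it equals the log-variance of pattern probabilities:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 8
h = rng.normal(-1.0, 0.5, n)      # biases, chosen negative for low firing rates
J = np.triu(rng.normal(0.0, 0.3, (n, n)), 1)   # random pairwise couplings

# Enumerate all 2^n binary spike patterns and their energies.
patterns = np.array(list(itertools.product([0, 1], repeat=n)))
energy = -(patterns @ h + np.einsum('pi,ij,pj->p', patterns, J, patterns))

def specific_heat(T):
    # c(T) = Var_T[E] / (n T^2), with P_T(x) proportional to exp(-E(x)/T).
    logw = -energy / T
    logw -= logw.max()            # stabilise the exponentials
    p = np.exp(logw)
    p /= p.sum()
    mE = (p * energy).sum()
    return (p * (energy - mE) ** 2).sum() / (n * T ** 2)

for T in (0.8, 1.0, 1.5, 2.0):
    print(T, specific_heat(T))
```

For the fitted models in the study the same quantity is evaluated on much larger populations, where enumeration is impossible and sampling-based estimates are needed.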

    Relationship between correlations and criticality.

<p><b>a)</b> Specific heat traces for beta-binomial model, different correlation strengths and population sizes. Heat traces are qualitatively similar, but differ markedly quantitatively (see y-axes). <b>b)</b> Specific heat diverges linearly, and the slope depends on the strength of correlations. <b>c)</b> Divergence rate of specific heat for beta-binomial model as a function of correlation strength (left). Rightmost point (at infinity) corresponds to analytical prediction of large-<i>n</i> behaviour. Divergence rates are strictly increasing with correlation strength (right) which is captured by a weak-correlation approximation (dashed line). <b>d)</b> Specific heat increases with correlation in the K-pairwise maximum entropy model: average and individual traces for 10 randomly subsampled populations for 6 different population sizes. Left to right: checkerboard, natural images and full-field flicker stimuli presented to the population. Correlation strengths denote mean correlation coefficient in each population.</p>
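The beta-binomial model is tractable enough to evaluate the specific heat at unit temperature exactly. In this sketch the parameter values are chosen purely for illustration; for p ~ Beta(α, β) and K | p ~ Binomial(n, p), the correlation between any two neurons is 1/(α + β + 1), independent of n:

```python
import math
import numpy as np

def beta_binomial_heat(n, alpha, beta):
    """Exact c(1) = Var[log P(x)] / n for an exchangeable beta-binomial
    population model: every pattern with k spikes has probability p(k)/C(n,k)."""
    lg = math.lgamma
    k = np.arange(n + 1)
    log_binom = np.array([lg(n + 1) - lg(i + 1) - lg(n - i + 1) for i in k])
    # Beta-binomial pmf: P(k) = C(n,k) * B(k+alpha, n-k+beta) / B(alpha, beta)
    log_pk = (log_binom
              + np.array([lg(i + alpha) for i in k])
              + np.array([lg(n - i + beta) for i in k])
              - lg(n + alpha + beta)
              + lg(alpha + beta) - lg(alpha) - lg(beta))
    pk = np.exp(log_pk)
    log_p_pattern = log_pk - log_binom   # log-probability of a single pattern
    mean = (pk * log_p_pattern).sum()
    return (pk * (log_p_pattern - mean) ** 2).sum() / n

# Mean firing rate 0.1 in both cases; correlation 0.05 vs. 0.1.
for a, b in ((1.9, 17.1), (0.9, 8.1)):
    print(a, b, [beta_binomial_heat(n, a, b) for n in (50, 100)])
```

Consistent with panels b and c, the heat grows with n at fixed correlation, and at fixed n it is larger for the more strongly correlated population.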

    Random subsampling leads to criticality-inducing correlations.

<p><b>a)</b> Illustration: A population with 100 neurons and infinite-range correlations; the average correlation between any pair of neurons is close to 0.05. Correlation as function of inter-neuron distance (left) and full correlation matrix (right). <b>b)</b> Average correlation in subpopulations of different size <i>n</i> (left) and specific heat at <i>T</i> = 1 as function of <i>n</i> (right), when neurons are sampled from 1 to 100 (blue). Random sampling gives identical results (gray). <b>c)</b> Population with limited-range correlations, same plots as in panel a. <b>d)</b> Left: Average correlation as function of population size for spatially structured sampling (green) and uniform subsampling (gray). Right: Specific heat at <i>T</i> = 1 grows linearly for random subsampling, but shows signs of saturation for spatially structured sampling. <b>e)</b> Average correlation as function of inter-neuron distance in the RGC simulation. For checkerboard and natural images, correlations drop to 0 for large distances. <b>f)</b> Specific heat at <i>T</i> = 1 for different stimulation conditions, for spatially structured (colour) or random subsampling (gray).</p>
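The sampling effect in panels c and d can be reproduced with a toy correlation matrix (decay constant and magnitudes below are illustrative, not taken from the paper): under random subsampling the average correlation stays near its full-population value regardless of n, while sampling a contiguous spatial patch of a limited-range population yields a higher average correlation at small n that decays as the patch grows:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100
# Limited-range correlations: strength decays with inter-neuron distance.
dist = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
C = 0.2 * np.exp(-dist / 5.0)
np.fill_diagonal(C, 1.0)

def mean_corr(idx):
    """Average off-diagonal correlation within the subpopulation idx."""
    sub = C[np.ix_(idx, idx)]
    return sub[~np.eye(len(idx), dtype=bool)].mean()

for n in (20, 50, 100):
    patch = np.arange(n)                          # spatially structured sample
    random_ = rng.choice(N, size=n, replace=False)  # random subsample
    print(n, mean_corr(patch), mean_corr(random_))
```

Because the specific heat divergence tracks the n-dependence of the average correlation, the random subsampling column behaves "critically" while the structured one does not.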

    SBI results for the two-parameter DSO rule.

    Recent advances in connectomics research enable the acquisition of increasing amounts of data about the connectivity patterns of neurons. How can we use this wealth of data to efficiently derive and test hypotheses about the principles underlying these patterns? A common approach is to simulate neuronal networks using a hypothesized wiring rule in a generative model and to compare the resulting synthetic data with empirical data. However, most wiring rules have at least some free parameters, and identifying parameters that reproduce empirical data can be challenging as it often requires manual parameter tuning. Here, we propose to use simulation-based Bayesian inference (SBI) to address this challenge. Rather than optimizing a fixed wiring rule to fit the empirical data, SBI considers many parametrizations of a rule and performs Bayesian inference to identify the parameters that are compatible with the data. It uses simulated data from multiple candidate wiring rule parameters and relies on machine learning methods to estimate a probability distribution (the ‘posterior distribution over parameters conditioned on the data’) that characterizes all data-compatible parameters. We demonstrate how to apply SBI in computational connectomics by inferring the parameters of wiring rules in an in silico model of the rat barrel cortex, given in vivo connectivity measurements. SBI identifies a wide range of wiring rule parameters that reproduce the measurements. We show how access to the posterior distribution over all data-compatible parameters allows us to analyze their relationship, revealing biologically plausible parameter interactions and enabling experimentally testable predictions. We further show how SBI can be applied to wiring rules at different spatial scales to quantitatively rule out invalid wiring hypotheses. 
Our approach is applicable to a wide range of generative models used in connectomics, providing a quantitative and efficient way to constrain model parameters with empirical connectivity data.
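The study uses neural-network-based SBI; as a minimal stand-in, the same idea of characterizing all data-compatible parameters can be sketched with rejection sampling on a hypothetical one-parameter wiring rule (every quantity below, including the rule and the "measured" value, is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n_pairs=500):
    """Hypothetical wiring rule: the connection probability of a neuron pair
    is a saturating function of its structural overlap, scaled by theta.
    Returns the simulated population connection probability (summary statistic)."""
    overlap = rng.exponential(1.0, n_pairs)
    p_conn = 1.0 - np.exp(-theta * overlap)
    return (rng.random(n_pairs) < p_conn).mean()

observed = 0.3   # stand-in for a measured population connection probability

# Rejection sampling: draw rule parameters from the prior, simulate,
# and keep the parameters whose simulations land close to the data.
prior = rng.uniform(0.0, 2.0, 10_000)
sims = np.array([simulate(t) for t in prior])
posterior = prior[np.abs(sims - observed) < 0.02]

print(len(posterior), posterior.mean(), posterior.std())
```

The retained samples approximate the posterior over rule parameters; its spread directly shows the range of parametrizations compatible with the measurement, which is the quantity the paper analyzes with more sample-efficient, learned density estimators.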

    Prior and posterior predictive distributions.


    Conditional posterior distributions.


    Formulating wiring rules in the rat barrel cortex as simulation-based models.

<p>(A) The structural model of the rat barrel cortex contains digital reconstructions of position, morphology, and subcellular features of several neuron types in the barrel cortex and the ventral posterior medial nucleus (VPM) of the thalamus. (B) We formulate a wiring rule that predicts the probability of a synapse between two neurons from their dense structural overlap (DSO), i.e., the product of the number of pre- and postsynaptic structural features, normalized by all postsynaptic features in a given subvolume (postAll). (C) By applying the wiring rule to every neuron-pair and subvolume combination of the model to obtain connection probabilities and then sampling corresponding synapse counts from a Poisson distribution (left), we can simulate a barrel cortex connectome. To compare the simulated data to measurements, we calculate population connection probabilities between VPM and barrel cortex cell types as they have been measured experimentally (right). (D) To obtain a simulation-based model, we introduce parameters to the rule and define a prior distribution (left) such that each parameter combination corresponds to a different rule configuration and leads to different simulated connection probabilities (right, grey; measured data in black, [34, 35]).</p>
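The computation in panels B and C can be sketched schematically with made-up feature counts (the real model uses dense reconstructions of subcellular features; everything below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy subvolume: structural feature counts for 5 presynaptic and
# 8 postsynaptic neurons (stand-ins for reconstructed bouton/target counts).
pre_features = rng.integers(1, 20, size=5)
post_features = rng.integers(1, 20, size=8)
post_all = post_features.sum()   # all postsynaptic features in the subvolume

# Dense structural overlap: product of pre and post features, normalized
# by postAll; interpreted here as the expected synapse count per pair.
dso = pre_features[:, None] * post_features[None, :] / post_all

# Sample synapse counts for every neuron pair from a Poisson distribution.
synapses = rng.poisson(dso)

# Summary statistic comparable to slice experiments: fraction of
# neuron pairs connected by at least one synapse.
p_conn = (synapses > 0).mean()
print(p_conn)
```

In the full model this is repeated over every neuron-pair and subvolume combination, and the parametrized variants of the rule (panel D) reweight the pre, post, and postAll terms before the Poisson sampling step.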

    Prior predictive distribution for the DSO rule.
